
Conversation

@julienmancuso (Contributor) commented Aug 16, 2025

Overview:

do not set resources by default

Summary by CodeRabbit

  • New Features

    • Adjusted resource configuration across components for more flexible scaling: Frontend and Planner no longer enforce CPU/Memory limits (requests unchanged). Worker retains GPU limit while removing CPU/Memory limits (requests unchanged).
    • Updated default expectations for leader/worker sets to include explicit CPU and Memory limits alongside GPU where applicable.
  • Tests

    • Updated test expectations to reflect the new resource configurations and limits behavior.

@tedzhouhk (Contributor) left a comment

Support this change, but better to have k8s experts review the code.

@coderabbitai bot commented Aug 16, 2025

Walkthrough

The changes adjust container resource specifications across Dynamo components: Frontend and Planner drop CPU/memory limits; Worker retains only a GPU limit; and tests are updated accordingly. The LeaderWorkerSet controller test updates expected Pod resources to include CPU, memory, and GPU limits. No public APIs or control flow are changed.

Changes

  • Tests (deploy/cloud/operator/internal/controller/dynamocomponentdeployment_controller_test.go, deploy/cloud/operator/internal/dynamo/component_planner_test.go): updated expectations; the controller test now asserts CPU (10), Memory (20Gi), and GPU (1) limits in the LeaderWorkerSet, and the planner test removes resource Limits from the expected container.
  • Components: Frontend (deploy/cloud/operator/internal/dynamo/component_frontend.go): removed CPU and Memory Limits from container.Resources; Requests remain.
  • Components: Planner (deploy/cloud/operator/internal/dynamo/component_planner.go): removed CPU and Memory Limits from PlannerDefaults.GetBaseContainer; Requests retained.
  • Components: Worker (deploy/cloud/operator/internal/dynamo/component_worker.go): removed CPU and Memory Limits; kept the GPU limit (nvidia.com/gpu: 1). Requests unchanged (CPU: 10, Memory: 20Gi).
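
For concreteness, here is a minimal Go sketch of the resulting worker container resources. The wrapper function and package layout are assumptions for illustration; only the values come from the summary above, where the repository sets these fields inline in component_worker.go.

package dynamo

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// defaultWorkerResources sketches the post-change worker defaults: CPU and
// memory are requested but no longer limited, and only the GPU extended
// resource is capped. The helper name is illustrative, not repository code.
func defaultWorkerResources() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{
			corev1.ResourceCPU:    resource.MustParse("10"),
			corev1.ResourceMemory: resource.MustParse("20Gi"),
		},
		Limits: corev1.ResourceList{
			// Extended resources can use a plain string key, but the typed
			// ResourceName conversion (see the nitpick below) reads more consistently.
			corev1.ResourceName("nvidia.com/gpu"): resource.MustParse("1"),
		},
	}
}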

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes


Poem

I nibbled the specs with whiskered delight,
Trimmed hard caps, left requests light.
Frontend and Planner, freer to roam,
Worker keeps one GPU chrome.
Tests now tally CPU and RAM true—
Thump-thump! a tidy warren for pods to chew. 🐇✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.2.2)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/product/migration-guide for migration instructions



@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
deploy/cloud/operator/internal/dynamo/component_worker.go (1)

45-45: Nit: Use a typed ResourceName for the GPU key

A minor readability/consistency improvement that also avoids ambiguity around typed keys in the ResourceList.

Apply this diff:

-           "nvidia.com/gpu": resource.MustParse("1"),
+           corev1.ResourceName("nvidia.com/gpu"): resource.MustParse("1"),
deploy/cloud/operator/internal/controller/dynamocomponentdeployment_controller_test.go (1)

708-711: Add a test for the default case (no CPU/mem limits in spec)

To guard the new default, add a case where spec.limits omits CPU/memory and assert the generated Pod’s container.Resources.Limits includes only GPU (and no CPU/memory). Also verify the /dev/shm EmptyDir size handling when no memory limit is present (should either be unset or follow the intended fallback).

Here’s a minimal example you can adapt:

func Test_generateLeaderWorkerSet_defaults_NoCpuMemLimits(t *testing.T) {
  g := gomega.NewWithT(t)
  s := scheme.Scheme
  _ = v1alpha1.AddToScheme(s)
  _ = corev1.AddToScheme(s)
  _ = leaderworkersetv1.AddToScheme(s)

  dcd := &v1alpha1.DynamoComponentDeployment{
    ObjectMeta: metav1.ObjectMeta{Name: "default-lws", Namespace: "default"},
    Spec: v1alpha1.DynamoComponentDeploymentSpec{
      DynamoComponent: "comp", DynamoTag: "tag",
      DynamoComponentDeploymentSharedSpec: v1alpha1.DynamoComponentDeploymentSharedSpec{
        ComponentType:   string(commonconsts.ComponentTypeWorker),
        ServiceName:     "svc",
        DynamoNamespace: ptr.To("default"),
        Multinode:       &v1alpha1.MultinodeSpec{NodeCount: 2},
        Resources: &common.Resources{
          Requests: &common.ResourceItem{CPU: "300m", Memory: "500Mi"},
          Limits:   &common.ResourceItem{GPU: "1"}, // no CPU/memory limits
        },
        ExtraPodSpec: &dynamoCommon.ExtraPodSpec{
          MainContainer: &corev1.Container{Image: "test:latest"},
        },
      },
    },
  }

  fakeClient := fake.NewClientBuilder().WithScheme(s).WithObjects(dcd).Build()
  r := &DynamoComponentDeploymentReconciler{
    Client:   fakeClient,
    Recorder: record.NewFakeRecorder(10),
  }

  got, _, err := r.generateLeaderWorkerSet(context.Background(), generateResourceOption{
    dynamoComponentDeployment: dcd,
    instanceID:                ptr.To(0),
  })
  g.Expect(err).ToNot(gomega.HaveOccurred())

  // Ensure only GPU is limited; CPU/memory limits are absent
  limits := got.Spec.LeaderWorkerTemplate.LeaderTemplate.Spec.Containers[0].Resources.Limits
  g.Expect(limits).To(gomega.HaveKeyWithValue(corev1.ResourceName("nvidia.com/gpu"), resource.MustParse("1")))
  g.Expect(limits).ToNot(gomega.HaveKey(corev1.ResourceCPU))
  g.Expect(limits).ToNot(gomega.HaveKey(corev1.ResourceMemory))
}
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between eaf42eb and 6f93925.

📒 Files selected for processing (5)
  • deploy/cloud/operator/internal/controller/dynamocomponentdeployment_controller_test.go (1 hunks)
  • deploy/cloud/operator/internal/dynamo/component_frontend.go (0 hunks)
  • deploy/cloud/operator/internal/dynamo/component_planner.go (0 hunks)
  • deploy/cloud/operator/internal/dynamo/component_planner_test.go (0 hunks)
  • deploy/cloud/operator/internal/dynamo/component_worker.go (1 hunks)
💤 Files with no reviewable changes (3)
  • deploy/cloud/operator/internal/dynamo/component_planner_test.go
  • deploy/cloud/operator/internal/dynamo/component_frontend.go
  • deploy/cloud/operator/internal/dynamo/component_planner.go
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: julienmancuso
PR: ai-dynamo/dynamo#1474
File: deploy/cloud/operator/internal/controller/dynamocomponent_controller.go:1308-1312
Timestamp: 2025-06-11T21:29:28.650Z
Learning: User julienmancuso expects replies in English; avoid switching languages unless explicitly requested.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and Test - dynamo
🔇 Additional comments (3)
deploy/cloud/operator/internal/dynamo/component_worker.go (2)

45-45: LGTM: CPU/memory limits dropped; GPU limit retained

This aligns with the PR objective (“do not set cpu/mem limit by default”) while keeping the GPU limit.


45-45: GPU requests for nvidia.com/gpu rely on API defaulting
We don’t explicitly set Requests[nvidia.com/gpu] in component_worker.go (line 45), so the Kubernetes API must default requests to match limits for extended resources. Confirm that your target clusters support this behavior (Kubernetes ≥1.8). If not, mirror the GPU limit into requests in code to avoid unschedulable pods.

• component_worker.go:45 – only "nvidia.com/gpu" is set under Limits, with no matching Requests; a hedged sketch of mirroring the limit into requests follows.
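
A minimal sketch of that mitigation, assuming a small helper is added alongside the worker resource construction (the function name is illustrative, not repository code):

package dynamo

import (
	corev1 "k8s.io/api/core/v1"
)

// mirrorGPULimitIntoRequests is a hypothetical helper, not part of this PR:
// if a container sets an extended-resource limit (e.g. nvidia.com/gpu) without
// a matching request, copy the limit into Requests so scheduling does not
// depend on API-server defaulting.
func mirrorGPULimitIntoRequests(res *corev1.ResourceRequirements, name corev1.ResourceName) {
	limit, ok := res.Limits[name]
	if !ok {
		return // nothing to mirror
	}
	if res.Requests == nil {
		res.Requests = corev1.ResourceList{}
	}
	if _, exists := res.Requests[name]; !exists {
		res.Requests[name] = limit.DeepCopy()
	}
}

Invoking it with corev1.ResourceName("nvidia.com/gpu") after building the requirements makes the GPU request explicit rather than relying on admission-time defaulting.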

deploy/cloud/operator/internal/controller/dynamocomponentdeployment_controller_test.go (1)

708-711: LGTM: Test now expects CPU/memory/GPU limits when specified in the spec

Good adjustment to validate explicit limits provided via DynamoComponentDeploymentSpec.

@julienmancuso changed the title from "fix: do not set cpu/mem limit by default" to "fix: do not set default for resources" on Aug 18, 2025
@julienmancuso enabled auto-merge (squash) on August 18, 2025 at 20:27
@julienmancuso merged commit 56e9923 into main on Aug 18, 2025 (13 of 16 checks passed)
@julienmancuso deleted the jsm/dep-316 branch on August 18, 2025 at 20:50
hhzhang16 pushed a commit that referenced this pull request Aug 27, 2025
Signed-off-by: Hannah Zhang <hannahz@nvidia.com>
